Metamers of neural networks reveal divergence from human perceptual systems
Feather, Jenelle, Durango, Alex, Gonzalez, Ray, McDermott, Josh
Deep neural networks have been embraced as models of sensory systems, instantiating representational transformations that appear to resemble those in the visual and auditory systems. To more thoroughly investigate their similarity to biological systems, we synthesized model metamers – stimuli that produce the same responses at some stage of a network's representation. We generated model metamers for natural stimuli by performing gradient descent on a noise signal, matching the responses of individual layers of image and audio networks to a natural image or speech signal. The resulting signals reflect the invariances instantiated in the network up to the matched layer. We then measured whether model metamers were recognizable to human observers – a necessary condition for the model representations to replicate those of humans.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- South America > Paraguay > Asunción > Asunción (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
Reviews: Metamers of neural networks reveal divergence from human perceptual systems
My review for this work remains the same following the author response, because I believe the authors have demonstrated this work to be of high quality and relevance to the NeurIPS community. Originality: Although the algorithms used to synthesize the metamers are nothing new, the work is a novel combination of previous approaches and techniques, and the analysis gives these methods a fresh perspective that leads to good insights. Quality: The work is a complete, high-quality piece of research that will advance our understanding of how architecture, task, and training determine representational similarity between networks and humans (as well as between networks). Clarity: The paper is overall clear, though some details could use clarification (e.g., what was the threshold for satisfactory termination of synthesis?). Significance: This work builds on theoretical and experimental work from neuroscience on how well models of perceptual systems capture the representations in the human brain: it synthesizes stimuli that completely match the responses of some part of the model and uses human subjects to test whether the original and matched stimuli are in fact perceived as the same.
Differences between deep neural networks and human perception
When your mother calls your name, you know it's her voice – no matter the volume, even over a poor cell phone connection. And when you see her face, you know it's hers – if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to variation is a hallmark of human perception. On the other hand, we are susceptible to illusions: we might fail to distinguish between sounds or images that are, in fact, different. Scientists have explained many of these illusions, but we lack a full understanding of the invariances in our auditory and visual systems.
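The synthesis procedure described in the abstract above – gradient descent on a noise signal until a chosen layer's response matches its response to a natural stimulus – can be sketched in a few lines. This is a minimal illustration with a hypothetical two-layer ReLU network standing in for the trained image and audio models used in the paper; the weights, sizes, and learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in "network": one hidden ReLU layer feeding a linear layer
# (hypothetical fixed random weights; the paper uses trained networks).
W1 = rng.normal(size=(16, 8)) / np.sqrt(8)
W2 = rng.normal(size=(4, 16)) / np.sqrt(16)

def forward(x):
    a1 = W1 @ x                    # hidden pre-activation
    h1 = np.maximum(a1, 0.0)       # ReLU
    a2 = W2 @ h1                   # matched layer's response
    return a1, h1, a2

def synthesize_metamer(reference, steps=5000, lr=0.02):
    """Gradient descent on a noise signal until the matched layer's
    response equals its response to `reference`."""
    target = forward(reference)[-1]
    x = rng.normal(size=reference.shape)      # start from noise
    for _ in range(steps):
        a1, h1, a2 = forward(x)
        # backprop of the matching loss ||a2 - target||^2
        g = 2.0 * (a2 - target)
        g = (W2.T @ g) * (a1 > 0)             # through the ReLU
        x -= lr * (W1.T @ g)
    return x

natural = rng.normal(size=8)
metamer = synthesize_metamer(natural)

# The layer response is matched, but the input itself need not be:
match_err = np.sum((forward(metamer)[-1] - forward(natural)[-1]) ** 2)
input_dist = np.sum((metamer - natural) ** 2)
print(match_err, input_dist)
```

Because the matched layer has fewer dimensions than the input, many different inputs produce the same response; the optimization lands on one of them, and it is generally far from the natural signal – that distance is exactly what reflects the layer's invariances.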
Towards Imperceptible and Robust Adversarial Example Attacks Against Neural Networks
Luo, Bo (The Chinese University of Hong Kong) | Liu, Yannan (The Chinese University of Hong Kong) | Wei, Lingxiao (The Chinese University of Hong Kong) | Xu, Qiang (The Chinese University of Hong Kong)
Machine learning systems based on deep neural networks, which produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they have been shown to be vulnerable to adversarial example attacks, which elicit malicious outputs by adding slight perturbations to the input. Previous adversarial example crafting methods use simple metrics to evaluate the distances between the original examples and the adversarial ones, so the resulting examples can be easily detected by human eyes. In addition, these attacks are often not robust to the inevitable noise and deviation in the physical world. In this work, we present a new adversarial example crafting method that takes the human perceptual system into consideration and maximizes the noise tolerance of the crafted adversarial example. Experimental results demonstrate the efficacy of the proposed technique.
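The crafting loop the abstract describes follows the usual gradient-based pattern. The sketch below uses a toy linear classifier with invented random weights and a plain max-norm perturbation budget; the paper instead attacks deep networks and weights each coordinate's perturbation by human perceptual sensitivity, but the shape of the attack is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear classifier with hypothetical weights (stand-in for a deep net).
w = rng.normal(size=16)
b = 0.0

def predict(x):
    return 1 if w @ x + b > 0 else 0

x = rng.normal(size=16)          # a clean, correctly handled input
score = w @ x + b

# For a linear model, the input gradient of the score is just `w`, so an
# FGSM-style step against the current decision is eps * sign(w). Choose
# eps just large enough to cross the decision boundary.
eps = 1.1 * abs(score) / np.sum(np.abs(w))
x_adv = x - np.sign(score) * eps * np.sign(w)

print(predict(x) != predict(x_adv))   # True: the predicted label flips
print(np.max(np.abs(x_adv - x)))      # the perturbation stays within eps
```

Replacing the uniform max-norm budget with a perceptually weighted one – perturbing more where humans are least sensitive – is the paper's route to imperceptibility.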
Analyzing human feature learning as nonparametric Bayesian inference
Griffiths, Thomas L., Austerweil, Joseph L.
Almost all successful machine learning algorithms and cognitive models require powerful representations capturing the features that are relevant to a particular problem. We draw on recent work in nonparametric Bayesian statistics to define a rational model of human feature learning that forms a featural representation from raw sensory data without pre-specifying the number of features. By comparing how the human perceptual system and our rational model use distributional and category information to infer feature representations, we seek to identify some of the forces that govern the process by which people separate and combine sensory primitives to form features.
- North America > United States > California > Alameda County > Berkeley (0.15)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.69)
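The key property of the feature-learning model in the abstract above is that the number of features is inferred rather than pre-specified. A standard nonparametric prior with this property is the Indian Buffet Process over binary object-by-feature matrices (assumed here as the relevant prior; the sampler below is a generic illustration, not the authors' implementation).

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_ibp(n_objects, alpha):
    """Draw a binary object-by-feature matrix from the Indian Buffet
    Process prior: each object reuses popular features and may add new
    ones, so the feature count K is random, not fixed in advance."""
    counts = []      # how many objects so far use each feature
    rows = []        # one list of feature flags per object
    for i in range(1, n_objects + 1):
        # reuse an existing feature with probability (uses so far) / i
        row = [bool(rng.random() < m / i) for m in counts]
        for j, used in enumerate(row):
            if used:
                counts[j] += 1
        # introduce a Poisson(alpha / i) number of brand-new features
        n_new = rng.poisson(alpha / i)
        row += [True] * n_new
        counts += [1] * n_new
        rows.append(row)
    width = len(counts)
    # pad earlier rows with False for features introduced later
    return np.array([r + [False] * (width - len(r)) for r in rows])

Z = sample_ibp(n_objects=10, alpha=2.0)
print(Z.shape)   # (10, K) with K determined by the draw
```

Fitting such a model then amounts to inferring which binary feature matrix best explains the raw sensory data – the comparison the abstract draws against human feature learning.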